Large collections of geo-referenced panoramic images are freely available for cities across the globe, as are detailed maps with the location and metadata of a great variety of urban objects. They provide a potential source of information on urban objects, but manual annotation for object detection is costly, laborious, and difficult. Can we utilize such multimedia sources to automatically annotate street-level images as an inexpensive alternative to manual labeling? With the PanorAMS framework, we introduce a method to automatically generate bounding-box annotations for panoramic images based on urban context information. Following this method, we acquire large-scale, albeit noisy, annotations for an urban dataset solely from open data sources, in a fast and automatic manner. The dataset covers the city of Amsterdam and includes 14 million noisy bounding-box annotations of 22 object categories in 771,299 panoramic images. For many objects, further fine-grained information is available from the geospatial metadata, such as building value, function, and average surface area. Such information would have been difficult, if not impossible, to acquire from the images alone. For detailed evaluation, we introduce an efficient crowdsourcing protocol for bounding-box annotation in panoramic images, which we deploy to acquire 147,075 ground-truth object annotations for a subset of 7,348 images, the PanorAMS-clean dataset. For our PanorAMS-noisy dataset, we provide an extensive analysis of the noise and of how different types of noise affect image classification and object detection performance. We make both datasets, PanorAMS-noisy and PanorAMS-clean, as well as benchmarks and tools, publicly available.
Translated by Google Translate
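As an illustration of how geospatial metadata can drive annotation, the sketch below maps an object's GPS coordinates to a horizontal pixel column in an equirectangular panorama, the first step toward deriving a bounding box from urban context. This is not the authors' code: the function name, the flat-earth bearing approximation, and the assumption that the panorama's left edge corresponds to the camera heading are all illustrative.

```python
import math

def geo_to_panorama_x(cam_lat, cam_lon, cam_heading_deg,
                      obj_lat, obj_lon, pano_width):
    """Map an object's GPS position to a horizontal pixel column in an
    equirectangular panorama, given the camera position and heading.
    Uses a small-area flat-earth approximation for the bearing."""
    # Approximate planar offsets (meters) from camera to object.
    d_lat = (obj_lat - cam_lat) * 111_320.0
    d_lon = (obj_lon - cam_lon) * 111_320.0 * math.cos(math.radians(cam_lat))
    # Compass bearing from camera to object (0 deg = north, clockwise).
    bearing = math.degrees(math.atan2(d_lon, d_lat)) % 360.0
    # Horizontal angle relative to the panorama heading, wrapped to [0, 360).
    rel = (bearing - cam_heading_deg) % 360.0
    return rel / 360.0 * pano_width
```

The vertical extent of the box would additionally require the object's distance and height, which is where the fine-grained geospatial metadata (e.g., building dimensions) comes in.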
Deep learning models have shown great effectiveness in identifying findings in medical images. However, they cannot handle the ever-changing clinical environment, which brings newly annotated medical data from different sources. To leverage such incoming data streams, these models would benefit greatly from learning sequentially from new samples without forgetting previously acquired knowledge. In this paper, we introduce a benchmark for continual disease classification on the MedMNIST collection by applying existing state-of-the-art continual learning methods. In particular, we consider three continual learning scenarios: task-incremental and class-incremental learning, as well as the newly defined cross-domain incremental learning. Task- and class-incremental learning of diseases address the problem of classifying new samples without re-training the model from scratch, while cross-domain incremental learning addresses the problem of dealing with datasets originating from different institutions while retaining previously obtained knowledge. We perform a thorough analysis of performance and study how the continual learning challenge of catastrophic forgetting manifests in this setting. The encouraging results demonstrate that continual learning has major potential to advance disease classification and to produce more robust and efficient learning frameworks for clinical settings. The code repository, data partitions, and baseline results for the complete benchmark will be made publicly available.
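To make the incremental scenarios concrete, here is a minimal sketch of how a label set can be partitioned into sequential tasks for class-incremental learning; the function name and interface are illustrative, not taken from the benchmark code.

```python
def class_incremental_splits(labels, classes_per_task):
    """Partition the set of class labels into sequential tasks for
    class-incremental learning: each task introduces a new group of
    classes, and the model must classify across all classes seen so far."""
    classes = sorted(set(labels))
    return [classes[i:i + classes_per_task]
            for i in range(0, len(classes), classes_per_task)]

# Example: 6 disease classes presented as 3 sequential tasks of 2 classes each.
tasks = class_incremental_splits([0, 1, 2, 3, 4, 5], classes_per_task=2)
```

In the task-incremental variant the task identity is also given at test time; in the cross-domain variant the splits would follow data sources (institutions) rather than class labels.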
Visual place recognition (VPR) is generally concerned with localizing outdoor images. However, localizing indoor scenes that contain part of an outdoor scene can be of great value for a wide range of applications. In this paper, we introduce Inside Out Visual Place Recognition (IOVPR), a task that aims to localize images based on outdoor scenes visible through windows. For this task, we present the new large-scale dataset Amsterdam-XXXL, with images taken in Amsterdam, consisting of 6.4 million panoramic street-view images and 1,000 user-generated indoor queries. Additionally, we introduce a new training protocol, Inside Out data augmentation, to adapt visual place recognition methods to this setting and thereby demonstrate the potential of inside-out visual place recognition. We empirically show the benefits of our proposed data augmentation scheme at a smaller scale, while demonstrating the difficulty this large-scale dataset poses for existing methods. With this new task, we aim to encourage the development of methods for IOVPR. The dataset and code are available for research purposes at https://github.com/saibr/iovpr
Neural processes have recently emerged as a class of powerful neural latent-variable models that combine the strengths of neural networks and stochastic processes. As they can encode contextual data in a network's function space, they offer a new way to model task relatedness in multi-task learning. To study this potential, we develop multi-task neural processes, a new variant of neural processes for multi-task learning. In particular, we propose to explore transferable knowledge from related tasks in function space to provide inductive bias for improving each individual task. To do so, we derive function priors in a hierarchical Bayesian inference framework, which enables each task to incorporate the shared knowledge provided by related tasks into its context for the prediction function. Our multi-task neural processes expand the scope of vanilla neural processes and provide a new way of exploring task relatedness in function space for multi-task learning. The proposed multi-task neural processes are capable of learning multiple tasks with limited labeled data and under domain shift. We perform extensive experimental evaluations on several benchmarks for multi-task regression and classification. The results demonstrate the effectiveness of multi-task neural processes in transferring useful knowledge among tasks, and their superior performance in multi-task classification and brain image segmentation.
Multi-task learning aims to explore task relatedness to improve individual tasks, which is of particular significance in the challenging scenario where only limited data is available for each task. To tackle this challenge, we propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks. We cast multi-task learning as a variational Bayesian inference problem, in which task relatedness is explored in a unified manner by specifying priors. To incorporate shared knowledge into each task, we design the prior of a task to be a learnable mixture of the variational posteriors of other related tasks, learned with the Gumbel-Softmax technique. In contrast to previous methods, VMTL can exploit task relatedness for both representations and classifiers in a principled way by jointly inferring their posteriors. This enables individual tasks to fully leverage the inductive biases provided by related tasks, therefore improving the overall performance of all tasks. Experimental results demonstrate that the proposed VMTL effectively tackles a variety of challenging multi-task learning settings with limited training data, for both classification and regression. Our method consistently surpasses previous methods, including strong Bayesian approaches, and achieves state-of-the-art performance on five benchmark datasets.
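The Gumbel-Softmax relaxation that VMTL relies on to learn the mixture weights can be sketched in a few lines. This is a generic illustration, not the authors' implementation; the function name and the mixing comment are assumptions.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Draw a differentiable, approximately one-hot sample from a
    categorical distribution parameterized by `logits` (the
    Gumbel-Softmax / Concrete relaxation). Lower temperature `tau`
    yields harder, more one-hot samples."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-9, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))            # Gumbel(0, 1) noise
    z = (np.asarray(logits) + g) / tau
    e = np.exp(z - z.max())            # numerically stable softmax
    return e / e.sum()
```

In VMTL, weights of this form could mix the variational posteriors of the other tasks into a learnable prior for the current task, while remaining differentiable for end-to-end training.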
We describe a Physics-Informed Neural Network (PINN) that simulates the flow induced by the astronomical tide in a synthetic port channel, with dimensions based on the Santos - S\~ao Vicente - Bertioga Estuarine System. PINN models aim to combine the knowledge of physical systems and data-driven machine learning models. This is done by training a neural network to minimize the residuals of the governing equations in sample points. In this work, our flow is governed by the Navier-Stokes equations with some approximations. There are two main novelties in this paper. First, we design our model to assume that the flow is periodic in time, which is not feasible in conventional simulation methods. Second, we evaluate the benefit of resampling the function evaluation points during training, which has a near zero computational cost and has been verified to improve the final model, especially for small batch sizes. Finally, we discuss some limitations of the approximations used in the Navier-Stokes equations regarding the modeling of turbulence and how it interacts with PINNs.
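A minimal sketch of the second novelty, resampling the function evaluation points at each training step: the collocation points are drawn fresh on every call, which costs almost nothing yet exposes the model to new locations. This is illustrative code, not the paper's implementation; it uses a toy 1-D Poisson residual with finite differences in place of the Navier-Stokes equations and automatic differentiation.

```python
import numpy as np

def pde_residual_loss(u, f, rng, n_points=128, h=1e-3):
    """Mean squared residual of u''(x) = f(x) at freshly resampled
    collocation points in (0, 1). In a PINN training loop this loss
    would be minimized with respect to the network parameters of u;
    here the second derivative is approximated by central differences."""
    x = rng.uniform(h, 1.0 - h, size=n_points)   # fresh points every call
    u_xx = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2
    return np.mean((u_xx - f(x)) ** 2)

# Sanity check: u(x) = x**3 satisfies u'' = 6x exactly, so its residual ~ 0.
```

Calling this loss inside each optimizer step, instead of fixing the collocation grid once, is the resampling strategy the abstract refers to.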
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Topic modeling is widely used for analytically evaluating large collections of textual data. One of the most popular topic techniques is Latent Dirichlet Allocation (LDA), which is flexible and adaptive, but not optimal for e.g. short texts from various domains. We explore how the state-of-the-art BERTopic algorithm performs on short multi-domain text and find that it generalizes better than LDA in terms of topic coherence and diversity. We further analyze the performance of the HDBSCAN clustering algorithm utilized by BERTopic and find that it classifies a majority of the documents as outliers. This crucial, yet overlooked problem excludes too many documents from further analysis. When we replace HDBSCAN with k-Means, we achieve similar performance, but without outliers.
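The proposed swap can be illustrated without the full BERTopic pipeline. In this sketch, synthetic Gaussian clusters stand in for transformer document embeddings (an assumption for the sake of a self-contained example); the point is that k-Means assigns every document to a topic, whereas HDBSCAN marks low-density points with the outlier label -1 and effectively drops them.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for document embeddings (e.g., from a sentence transformer):
# three synthetic clusters of 50 documents each in 8 dimensions.
rng = np.random.default_rng(0)
embeddings = np.vstack([rng.normal(loc=c, scale=0.1, size=(50, 8))
                        for c in (-1.0, 0.0, 1.0)])

# k-Means assigns every document to a cluster; there is no outlier
# label (-1) as with HDBSCAN, so no document is excluded from analysis.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)
assert (labels >= 0).all() and len(labels) == len(embeddings)
```

In practice the replacement can be wired into BERTopic itself, whose documentation describes passing a scikit-learn-compatible clusterer such as `KMeans` via the `hdbscan_model` argument; the trade-off is that k-Means requires choosing the number of topics up front.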
With the progress of sensor technology in wearables, the collection and analysis of PPG signals are gaining more interest. Using Machine Learning, the cardiac rhythm corresponding to PPG signals can be used to predict different tasks such as activity recognition, sleep stage detection, or more general health status. However, supervised learning is often limited by the amount of available labeled data, which is typically expensive to obtain. To address this problem, we propose a Self-Supervised Learning (SSL) method with a pretext task of signal reconstruction to learn an informative generalized PPG representation. The performance of the proposed SSL framework is compared with two fully supervised baselines. The results show that in a very limited label data setting (10 samples per class or less), using SSL is beneficial, and a simple classifier trained on SSL-learned representations outperforms fully supervised deep neural networks. However, the results reveal that the SSL-learned representations are too focused on encoding the subjects. Unfortunately, there is high inter-subject variability in the SSL-learned representations, which makes working with this data more challenging when labeled data is scarce. The high inter-subject variability suggests that there is still room for improvements in learning representations. In general, the results suggest that SSL may pave the way for the broader use of machine learning models on PPG data in label-scarce regimes.
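One concrete form of a signal-reconstruction pretext task is to hide a random contiguous chunk of each PPG window and train the encoder to reconstruct the original. The paper's exact masking scheme is not specified here, so the function below is an illustrative assumption rather than the authors' method.

```python
import numpy as np

def make_reconstruction_pairs(signals, mask_frac=0.25, rng=None):
    """Build (input, target) pairs for a signal-reconstruction pretext
    task: zero out a random contiguous span of each window and use the
    unmasked original as the reconstruction target.
    `signals` has shape (n_windows, window_length)."""
    rng = np.random.default_rng() if rng is None else rng
    signals = np.asarray(signals, dtype=float)
    masked = signals.copy()
    n, length = signals.shape
    span = max(1, int(mask_frac * length))
    for i in range(n):
        start = rng.integers(0, length - span + 1)
        masked[i, start:start + span] = 0.0
    return masked, signals
```

No labels are needed to build these pairs, which is what makes the pretext task usable in the label-scarce regimes the abstract targets; a downstream classifier is then trained on the frozen learned representations.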
Satellite image analysis has important implications for land use, urbanization, and ecosystem monitoring. Deep learning methods can facilitate the analysis of different satellite modalities, such as electro-optical (EO) and synthetic aperture radar (SAR) imagery, by supporting knowledge transfer between the modalities to compensate for individual shortcomings. Recent progress has shown how distributional alignment of neural network embeddings can produce powerful transfer learning models by employing a sliced Wasserstein distance (SWD) loss. We analyze how this method can be applied to Sentinel-1 and -2 satellite imagery and develop several extensions toward making it effective in practice. In an application to few-shot Local Climate Zone (LCZ) prediction, we show that these networks outperform multiple common baselines on datasets with a large number of classes. Further, we provide evidence that instance normalization can significantly stabilize the training process and that explicitly shaping the embedding space using supervised contrastive learning can lead to improved performance.
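The sliced Wasserstein distance at the core of the alignment loss can be sketched in NumPy: project both embedding sets onto random unit directions, solve each resulting 1-D transport problem by sorting, and average over slices. This is a generic Monte-Carlo implementation, not the paper's code; names are illustrative, and it assumes equally sized point clouds.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=128, rng=None):
    """Monte-Carlo sliced Wasserstein-2 distance between point clouds
    x, y of shape (n, d): project onto random unit directions, match
    the 1-D projections by sorting, and average over slices."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[1]
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    px, py = x @ theta.T, y @ theta.T                      # (n, n_proj) projections
    px.sort(axis=0)                                        # sorting solves the
    py.sort(axis=0)                                        # 1-D optimal coupling
    return float(np.sqrt(np.mean((px - py) ** 2)))
```

Minimizing a loss of this form between the embedding distributions of the two modalities (e.g., Sentinel-1 SAR and Sentinel-2 EO patches) is what drives the distributional alignment described above.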